43 research outputs found

    A comparison of Bayesian localization methods in the presence of outliers

    Localization of a user in a wireless network is challenging in the presence of malfunctioning or malicious reference nodes, since if they are not accounted for, large localization errors can ensue. We evaluate three Bayesian methods to statistically identify outliers during localization: an exact method, an expectation maximization (EM) method proposed earlier, and a new method based on Variational Bayesian EM (VBEM). Simulation results indicate similar performance for the latter two schemes, with the VBEM algorithm able to provide a statistical description of the user location, rather than an estimate as in the simpler EM case. In contrast to previous studies, we find that there is a significant gap between the approximate methods and the exact method, the cause of which is discussed
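
    The outlier-identification idea described above can be illustrated with a minimal EM-style sketch: each range measurement carries a latent inlier/outlier indicator, the E-step computes the probability that a measurement is an inlier given the current position estimate, and the M-step refines the position with the resulting weights. The mixture model, parameter names, and the Gauss-Newton update below are assumptions of this illustration, not the formulation used in the paper.

```python
import numpy as np

def em_robust_localization(anchors, ranges, sigma=1.0, outlier_scale=20.0,
                           prior_inlier=0.9, n_iter=50):
    """Toy EM sketch: estimate a 2-D position from range measurements while
    softly down-weighting anchors whose residuals look like outliers."""
    x = anchors.mean(axis=0)                 # crude initial position guess
    for _ in range(n_iter):
        pred = np.maximum(np.linalg.norm(anchors - x, axis=1), 1e-9)
        resid = ranges - pred
        # E-step: posterior probability that each measurement is an inlier,
        # under a two-component Gaussian mixture on the residuals.
        p_in = prior_inlier * np.exp(-0.5 * (resid / sigma) ** 2) / sigma
        p_out = ((1 - prior_inlier) * np.exp(-0.5 * (resid / (outlier_scale * sigma)) ** 2)
                 / (outlier_scale * sigma))
        w = p_in / (p_in + p_out + 1e-12)
        # M-step: weighted Gauss-Newton update of the position estimate.
        J = (x - anchors) / pred[:, None]    # Jacobian of predicted ranges w.r.t. x
        W = np.diag(w)
        x = x + np.linalg.solve(J.T @ W @ J, J.T @ W @ resid)
    return x, w

# Example: four reference nodes, one of them reporting a grossly wrong range.
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_pos = np.array([3.0, 4.0])
ranges = np.linalg.norm(anchors - true_pos, axis=1)
ranges[3] += 15.0                            # malfunctioning node
print(em_robust_localization(anchors, ranges))
```

    A VBEM variant would additionally keep a posterior distribution over the position (and over the indicator probabilities) rather than the point estimate returned here, which is the distinction the abstract draws between the two approximate methods.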

    Convergent Communication, Sensing and Localization in 6G Systems: An Overview of Technologies, Opportunities and Challenges

    Herein, we focus on convergent 6G communication, localization and sensing systems by identifying key technology enablers, discussing their underlying challenges and implementation issues, and recommending potential solutions. Moreover, we discuss exciting new opportunities for integrated localization and sensing applications, which will disrupt traditional design principles and revolutionize the way we live, interact with our environment, and do business. Regarding potential enabling technologies, 6G will continue to develop towards even higher frequency ranges, wider bandwidths, and massive antenna arrays. In turn, this will enable sensing solutions with very fine range, Doppler, and angular resolutions, as well as localization to cm-level accuracy. In addition, new materials, device types, and reconfigurable surfaces will allow network operators to reshape and control the electromagnetic response of the environment. At the same time, machine learning and artificial intelligence will leverage the unprecedented availability of data and computing resources to tackle the biggest and hardest problems in wireless communication systems. As a result, 6G systems will be truly intelligent wireless systems that not only provide ubiquitous communication but also empower high-accuracy localization and high-resolution sensing services. They will become the catalyst for this revolution by bringing about a unique new set of features and service capabilities, where localization and sensing will coexist with communication, continuously sharing the available resources in time, frequency, and space. This work concludes by highlighting foundational research challenges, as well as implications and opportunities related to privacy, security, and trust
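
    The link the abstract draws between wider bandwidths, larger arrays, and finer sensing resolution can be made concrete with textbook approximations: range resolution scales as c/(2B) and angular resolution roughly as wavelength over array aperture. The formulas and the example numbers below are standard rules of thumb, not values taken from the paper.

```python
# Back-of-the-envelope scaling of sensing resolution with bandwidth and array size.
C = 3e8  # speed of light, m/s

def range_resolution_m(bandwidth_hz):
    # delta_r ~ c / (2 B): doubling the bandwidth halves the range resolution cell
    return C / (2 * bandwidth_hz)

def angular_resolution_rad(carrier_hz, n_elements, spacing_wavelengths=0.5):
    # beamwidth ~ wavelength / aperture for a half-wavelength-spaced uniform array
    wavelength = C / carrier_hz
    return wavelength / (n_elements * spacing_wavelengths * wavelength)

print(range_resolution_m(400e6))            # ~0.38 m with 400 MHz of bandwidth
print(range_resolution_m(4e9))              # ~0.04 m with 4 GHz, i.e. cm-level
print(angular_resolution_rad(140e9, 64))    # ~0.031 rad (~1.8 deg) with 64 elements
print(angular_resolution_rad(140e9, 1024))  # ~0.002 rad (~0.11 deg) with 1024 elements
```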

    Measurement of the azimuthal anisotropy of Y(1S) and Y(2S) mesons in PbPb collisions at √s_NN = 5.02 TeV

    The second-order Fourier coefficients (v2) characterizing the azimuthal distributions of Υ(1S) and Υ(2S) mesons produced in PbPb collisions at √s_NN = 5.02 TeV are studied. The Υ mesons are reconstructed in their dimuon decay channel, as measured by the CMS detector. The collected data set corresponds to an integrated luminosity of 1.7 nb⁻¹. The scalar product method is used to extract the v2 coefficients of the azimuthal distributions. Results are reported for the rapidity range |y| < 2.4, in the transverse momentum interval 0 < pT < 50 GeV/c, and in three centrality ranges of 10–30%, 30–50% and 50–90%. In contrast to the J/ψ mesons, the measured v2 values for the Υ mesons are found to be consistent with zero
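
    The scalar product method mentioned above correlates each candidate's azimuthal angle with a reference flow vector built from other particles in the event, normalizing by the correlation between reference sub-events. The sketch below shows a simplified version of that estimator; acceptance corrections, particle weights, and the dimuon signal extraction used in the actual analysis are omitted, and the function and variable names are placeholders.

```python
import numpy as np

def v2_scalar_product(cand_phi, ref_phi_a, ref_phi_b):
    """Simplified scalar-product estimate of v2.
    cand_phi, ref_phi_a, ref_phi_b: lists of per-event arrays of azimuthal angles
    (candidates, reference sub-event A, reference sub-event B)."""
    num, den = [], []
    for phi_c, phi_a, phi_b in zip(cand_phi, ref_phi_a, ref_phi_b):
        q_a = np.mean(np.exp(2j * phi_a))   # second-order flow vector, sub-event A
        q_b = np.mean(np.exp(2j * phi_b))   # second-order flow vector, sub-event B
        u = np.mean(np.exp(2j * phi_c))     # candidate unit vectors, event average
        num.append(np.real(u * np.conj(q_a)))
        den.append(np.real(q_a * np.conj(q_b)))
    # The denominator corrects for the finite resolution of the reference flow vector.
    return np.mean(num) / np.sqrt(np.mean(den))
```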

    Measurement of prompt D⁰ and D̄⁰ meson azimuthal anisotropy and search for strong electric fields in PbPb collisions at √s_NN = 5.02 TeV

    The strong Coulomb field created in ultrarelativistic heavy ion collisions is expected to produce a rapidity-dependent difference (Δv2) in the second Fourier coefficient of the azimuthal distribution (elliptic flow, v2) between D⁰ (ūc) and D̄⁰ (uc̄) mesons. Motivated by the search for evidence of this field, the CMS detector at the LHC is used to perform the first measurement of Δv2. The rapidity-averaged value is found to be ⟨Δv2⟩ = 0.001 ± 0.001 (stat) ± 0.003 (syst) in PbPb collisions at √s_NN = 5.02 TeV. In addition, the influence of the collision geometry is explored by measuring the D⁰ and D̄⁰ meson v2 and triangular flow coefficient (v3) as functions of rapidity, transverse momentum (pT), and event centrality (a measure of the overlap of the two Pb nuclei). A clear centrality dependence of prompt D⁰ meson v2 values is observed, while the v3 is largely independent of centrality. These trends are consistent with expectations of flow driven by the initial-state geometry
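
    A quick way to read the rapidity-averaged result quoted above is to combine its statistical and systematic uncertainties and compare the central value with zero. Adding the two components in quadrature is a common simplification assumed here, not a procedure stated in the abstract.

```python
import math

delta_v2 = 0.001                 # central value of the rapidity-averaged difference
stat, syst = 0.001, 0.003        # quoted uncertainties
total = math.hypot(stat, syst)   # ~0.0032 when combined in quadrature
print(delta_v2 / total)          # ~0.3 standard deviations, i.e. consistent with zero
```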

    Performance of the CMS Level-1 trigger in proton-proton collisions at √s = 13 TeV

    At the start of Run 2 in 2015, the LHC delivered proton-proton collisions at a center-of-mass energy of 13 TeV. During Run 2 (years 2015–2018) the LHC eventually reached a luminosity of 2.1 × 10³⁴ cm⁻²s⁻¹, almost three times that reached during Run 1 (2009–2013) and a factor of two larger than the LHC design value, leading to events with up to a mean of about 50 simultaneous inelastic proton-proton collisions per bunch crossing (pileup). The CMS Level-1 trigger was upgraded prior to 2016 to improve the selection of physics events in the challenging conditions posed by the second run of the LHC. This paper describes the performance of the CMS Level-1 trigger upgrade during the data taking period of 2016–2018. The upgraded trigger implements pattern recognition and boosted decision tree regression techniques for muon reconstruction, includes pileup subtraction for jets and energy sums, and incorporates pileup-dependent isolation requirements for electrons and tau leptons. In addition, the new trigger calculates high-level quantities such as the invariant mass of pairs of reconstructed particles. The upgrade reduces the trigger rate from background processes and improves the trigger efficiency for a wide variety of physics signals
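
    One of the high-level quantities the upgraded trigger computes is the invariant mass of a pair of reconstructed objects. For objects described by transverse momentum, pseudorapidity, and azimuthal angle, and treated as massless, the standard collider expression is m² = 2 pT1 pT2 (cosh Δη − cos Δφ). The snippet below is a generic illustration of that formula, not the firmware implementation.

```python
import math

def pair_mass(pt1, eta1, phi1, pt2, eta2, phi2):
    """Invariant mass of two massless objects from (pT, eta, phi); units follow the pT inputs."""
    return math.sqrt(2.0 * pt1 * pt2 * (math.cosh(eta1 - eta2) - math.cos(phi1 - phi2)))

# e.g. two trigger muons with pT of 25 and 20 GeV
print(pair_mass(25.0, 0.4, 1.0, 20.0, -0.8, -2.0))   # dimuon invariant mass in GeV
```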

    Measurement of the Y(1S) pair production cross section and search for resonances decaying to Y(1S)μ⁺μ⁻ in proton-proton collisions at √s = 13 TeV

    The fiducial cross section for Y(1S) pair production in proton-proton collisions at a center-of-mass energy of 13 TeV in the region where both Y(1S) mesons have an absolute rapidity below 2.0 is measured to be 79 ± 11 (stat) ± 6 (syst) ± 3 (B) pb assuming the mesons are produced unpolarized. The last uncertainty corresponds to the uncertainty in the Y(1S) meson dimuon branching fraction. The measurement is performed in the final state with four muons using proton-proton collision data collected in 2016 by the CMS experiment at the LHC, corresponding to an integrated luminosity of 35.9 fb⁻¹. This process serves as a standard model reference in a search for narrow resonances decaying to Y(1S)μ⁺μ⁻ in the same final state. Such a resonance could indicate the existence of a tetraquark that is a bound state of two b quarks and two b̅ antiquarks. The tetraquark search is performed for masses in the vicinity of four times the bottom quark mass, between 17.5 and 19.5 GeV, while a generic search for other resonances is performed for masses between 16.5 and 27 GeV. No significant excess of events compatible with a narrow resonance is observed in the data. Limits on the production cross section times branching fraction to four muons via an intermediate Y(1S) resonance are set as a function of the resonance mass
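
    For a rough sense of scale, the quoted fiducial cross section and integrated luminosity can be turned into an approximate event count. The dimuon branching fraction value used below (about 2.48%) is a PDG figure assumed for illustration, it enters squared because both mesons must decay to muons, and detector efficiency is ignored, so this is only an order-of-magnitude sketch.

```python
sigma_pb = 79.0                      # fiducial Y(1S) pair cross section from the abstract, pb
lumi_pb_inv = 35.9 * 1000.0          # 35.9 fb^-1 expressed in pb^-1
br_mumu = 0.0248                     # assumed B(Y(1S) -> mu+ mu-), PDG value

n_produced = sigma_pb * lumi_pb_inv       # ~2.8 million Y(1S) pairs produced in the fiducial region
n_four_muon = n_produced * br_mumu ** 2   # ~1.7 thousand four-muon decays before efficiency
print(round(n_produced), round(n_four_muon))
```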

    Pileup mitigation at CMS in 13 TeV data

    With increasing instantaneous luminosity at the LHC come additional reconstruction challenges. At high luminosity, many collisions occur simultaneously within one proton-proton bunch crossing. The isolation of an interesting collision from the additional "pileup" collisions is needed for effective physics performance. In the CMS Collaboration, several techniques capable of mitigating the impact of these pileup collisions have been developed. Such methods include charged-hadron subtraction, pileup jet identification, isospin-based neutral particle "δβ" correction, and, most recently, pileup per particle identification. This paper surveys the performance of these techniques for jet and missing transverse momentum reconstruction, as well as muon isolation. The analysis makes use of data corresponding to 35.9 fb⁻¹ collected with the CMS experiment in 2016 at a center-of-mass energy of 13 TeV. The performance of each algorithm is discussed for up to 70 simultaneous collisions per bunch crossing. Significant improvements are found in the identification of pileup jets, the jet energy, mass, and angular resolution, missing transverse momentum resolution, and muon isolation when using pileup per particle identification
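
    Of the techniques listed above, charged-hadron subtraction is the simplest to sketch: charged particles whose tracks point back to a pileup vertex are removed before jet clustering, while neutral particles, which leave no track, are kept and handled by the other methods. The particle record layout below is an assumption of this example, not the CMS data format.

```python
def charged_hadron_subtraction(particles, primary_vertex_id=0):
    """Keep neutral particles and charged particles from the primary vertex;
    drop charged particles associated with pileup vertices."""
    return [p for p in particles
            if p["charge"] == 0 or p["vertex_id"] == primary_vertex_id]

# Example input: one primary-vertex particle, one neutral particle, one charged pileup particle.
particles = [
    {"charge": 1, "vertex_id": 0, "pt": 20.0},
    {"charge": 0, "vertex_id": 3, "pt": 5.0},   # neutral: kept even if from pileup
    {"charge": -1, "vertex_id": 3, "pt": 2.0},  # charged pileup: removed
]
print(charged_hadron_subtraction(particles))
```

    Pileup per particle identification extends this idea by assigning every particle, charged or neutral, a continuous weight based on how pileup-like its local environment looks, rather than making a binary keep/drop decision.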

    Studies of charm and beauty hadron long-range correlations in pp and pPb collisions at LHC energies


    Identification of heavy, energetic, hadronically decaying particles using machine-learning techniques

    Machine-learning (ML) techniques are explored to identify and classify hadronic decays of highly Lorentz-boosted W/Z/Higgs bosons and top quarks. Techniques without ML have also been evaluated and are included for comparison. The identification performances of a variety of algorithms are characterized in simulated events and directly compared with data. The algorithms are validated using proton-proton collision data at √s = 13 TeV, corresponding to an integrated luminosity of 35.9 fb⁻¹. Systematic uncertainties are assessed by comparing the results obtained using simulation and collision data. The new techniques studied in this paper provide significant performance improvements over non-ML techniques, reducing the background rate by up to an order of magnitude at the same signal efficiency
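
    The performance comparison quoted above, background rate at a fixed signal efficiency, can be reproduced for any classifier by choosing the score threshold that keeps the desired fraction of simulated signal jets and measuring the background mistag rate at that threshold. The sketch below uses toy Gaussian score distributions as placeholders for the taggers studied in the paper.

```python
import numpy as np

def background_rate_at_signal_eff(signal_scores, background_scores, target_eff=0.5):
    """Background mistag rate at the threshold that keeps `target_eff` of the signal."""
    cut = np.quantile(signal_scores, 1.0 - target_eff)   # keep the top target_eff of signal
    return float(np.mean(background_scores >= cut))

# Toy comparison: a more discriminating tagger gives a lower mistag rate
# at the same 50% signal efficiency working point.
rng = np.random.default_rng(0)
sig = rng.normal(1.0, 1.0, 10_000)
bkg_weak = rng.normal(0.0, 1.0, 10_000)
bkg_strong = rng.normal(-1.5, 1.0, 10_000)
print(background_rate_at_signal_eff(sig, bkg_weak))     # ~0.16
print(background_rate_at_signal_eff(sig, bkg_strong))   # ~0.006, over an order of magnitude lower
```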